
Content On This Page

Random Experiments and Sample Spaces
Events
Axiomatic Approach to Probability
Laws of Probability


Chapter 16 Probability (Concepts)

Welcome to this chapter where we embark on a more formal and mathematically rigorous exploration of Probability. Building upon the classical definition and experimental approaches encountered previously, we now delve into the axiomatic foundation of probability theory. This approach, pioneered by the Russian mathematician Andrey Kolmogorov, provides a more robust and general framework that applies even when outcomes are not necessarily equally likely, underpinning modern probability theory and its vast applications in science, finance, and beyond. Our goal is to understand probability not just as a ratio, but as a function satisfying specific fundamental rules or axioms.

We begin by revisiting the essential terminology with greater precision. A random experiment is any process that yields an outcome which cannot be predicted with certainty. The set of all possible outcomes of a random experiment is called the sample space, universally denoted by $\mathbf{S}$. Each individual outcome is an element of $S$. An event, typically denoted by $E$, $F$, etc., is defined formally as any subset of the sample space $S$ ($E \subseteq S$). An event represents a collection of one or more possible outcomes that we might be interested in. For instance, when rolling a standard die, the sample space is $S = \{1, 2, 3, 4, 5, 6\}$. The event 'rolling an even number' corresponds to the subset $E = \{2, 4, 6\}$.

The cornerstone of this chapter is the Axiomatic Approach to Probability. Instead of defining probability solely based on counting outcomes, this approach defines probability as a function, $P$, which assigns a real number $P(E)$ (the probability of event $E$) to every event $E$ in the sample space $S$. This probability function $P$ must satisfy three fundamental axioms: non-negativity ($P(E) \ge 0$ for every event $E$), normalization ($P(S) = 1$), and additivity ($P(E \cup F) = P(E) + P(F)$ whenever $E$ and $F$ are mutually exclusive).

From these three simple axioms, we can logically deduce several crucial properties of probability, such as $P(\emptyset) = 0$ for the impossible event, $P(E') = 1 - P(E)$ for the complementary event, and $0 \le P(E) \le 1$ for every event $E$.

Furthermore, we derive the important Addition Rule for Probability which applies to any two events $E$ and $F$ (whether mutually exclusive or not): $$ \mathbf{P(E \cup F) = P(E) + P(F) - P(E \cap F)} $$ This formula accounts for the potential overlap (intersection $E \cap F$) between the two events.

While the axiomatic approach is general, it reconnects with the classical definition in the special case where the sample space $S$ consists of $n$ equally likely outcomes. In such scenarios, the axiomatic approach confirms that the probability of an event $E$ containing $m$ of these favourable outcomes is indeed given by the familiar ratio $\mathbf{P(E) = \frac{m}{n}}$. We will practice applying these axioms and derived rules to calculate probabilities in various contexts – including problems involving coins, dice, card draws from standard 52-card decks (requiring knowledge of suits, colours, face cards), and selections of objects. These problems often involve careful identification of the sample space $S$, the event set $E$ as a subset of $S$, and frequently require the use of counting techniques (permutations and combinations) to determine the number of favourable and total outcomes before applying the probability rules.



Random Experiments and Sample Spaces

Probability is the mathematical study of chance and uncertainty. It provides tools to quantify the likelihood of different outcomes occurring in situations where the result is not predictable. The fundamental concepts in probability theory are those of a random experiment and its sample space.

To understand probability, it is essential to first be clear about the precise meaning of the terms used. The entire theory is built upon the distinction between processes that are predictable and those that involve an element of chance.


1. Experiment

In the context of science and mathematics, an Experiment is a well-defined procedure or action that can be repeated and results in a set of observable outcomes. It is a general term for any process that generates data or observations.

Experiments can be broadly classified into two categories: deterministic and random.


2. Deterministic Experiment

A Deterministic Experiment is an experiment whose outcome is certain and predictable if it is performed under identical conditions. There is no element of chance involved. The result is uniquely determined by the conditions under which the experiment is performed.

Characteristics:

  • The outcome is known with certainty before the experiment is performed.
  • Repeating the experiment under identical conditions always produces the same result.
  • There is no element of chance involved.

Examples:

  • Heating pure water to $100\degree$C at standard atmospheric pressure: it always boils.
  • Applying a known force to a known mass: the acceleration is uniquely determined by Newton's second law.


3. Random Experiment (or Probabilistic/Stochastic Experiment)

A Random Experiment is an experiment where the outcome cannot be predicted with certainty in advance, even if the experiment is repeated under the same conditions. While the specific outcome of a single trial is unknown, the set of all possible outcomes is known.

Characteristics:

  • The experiment has more than one possible outcome.
  • It is not possible to predict in advance which outcome will occur in a given trial.
  • The set of all possible outcomes is known before the experiment is performed.

Examples:

  • Tossing a coin: the result may be a head or a tail.
  • Rolling a die: any of the numbers 1 to 6 may appear.
  • Drawing a card from a well-shuffled deck: any of the 52 cards may be drawn.

The study of probability is concerned exclusively with random experiments.


Outcome and Sample Space:

An Outcome is a single possible result of a random experiment.

The Sample Space of a random experiment is the set of all possible outcomes of the experiment. It represents the complete set of potential results. The sample space is typically denoted by the capital letter $S$ or sometimes $\Omega$. Each individual outcome in the sample space is called a sample point.

The sample space must be defined such that:

  • every possible outcome of the experiment corresponds to exactly one sample point; and
  • no two sample points can occur together in a single trial (the outcomes are mutually exclusive and collectively exhaustive).


Examples of Sample Spaces:

Let's determine the sample space for various random experiments:

  1. Tossing a coin: The only possible results are getting a Head (H) or getting a Tail (T).

    Sample Space, $S = \{H, T\}$

    The number of sample points is $n(S) = 2$.
  2. Rolling a standard die: The possible outcomes are the faces showing the numbers 1, 2, 3, 4, 5, or 6.

    Sample Space, $S = \{1, 2, 3, 4, 5, 6\}$

    The number of sample points is $n(S) = 6$.
  3. Tossing two coins simultaneously: We need to list all possible ordered pairs of outcomes for the two coins. Let the first letter denote the outcome on the first coin and the second letter the outcome on the second coin.
    • First coin H, Second coin H: HH
    • First coin H, Second coin T: HT
    • First coin T, Second coin H: TH
    • First coin T, Second coin T: TT

    Sample Space, $S = \{HH, HT, TH, TT\}$

    The number of sample points is $n(S) = 4$.
  4. Rolling two dice simultaneously: We consider the two dice distinguishable (e.g., Die 1 and Die 2). The outcome is an ordered pair $(d_1, d_2)$, where $d_1$ is the result on Die 1 and $d_2$ is the result on Die 2. Each $d_i$ can be any integer from 1 to 6.

    $S = \{(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),$

    $\phantom{S = } (2,1), (2,2), (2,3), (2,4), (2,5), (2,6),$

    $\phantom{S = } (3,1), (3,2), (3,3), (3,4), (3,5), (3,6),$

    $\phantom{S = } (4,1), (4,2), (4,3), (4,4), (4,5), (4,6),$

    $\phantom{S = } (5,1), (5,2), (5,3), (5,4), (5,5), (5,6),$

    $\phantom{S = } (6,1), (6,2), (6,3), (6,4), (6,5), (6,6)\}$

    The number of sample points is $n(S) = 6 \times 6 = 36$. (Using the Multiplication Principle from Counting).
  5. Drawing two balls from a bag containing a red (R) and a blue (B) ball, one after the other without replacement: The order of drawing matters.

    Sample Space, $S = \{RB, BR\}$

    $n(S) = 2$.
  6. Drawing two balls from a bag containing a red (R) and a blue (B) ball, one after the other with replacement: The order matters, and because the first ball is returned to the bag before the second draw, both colours are available on each draw.

    Sample Space, $S = \{RR, RB, BR, BB\}$

    $n(S) = 4$.
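The sample spaces listed above can be generated mechanically; a minimal Python sketch using `itertools` (the variable names are illustrative):

```python
from itertools import permutations, product

# Tossing two coins: all ordered pairs of H/T
two_coins = ["".join(p) for p in product("HT", repeat=2)]
print(two_coins)      # ['HH', 'HT', 'TH', 'TT'], so n(S) = 4

# Rolling two dice: ordered pairs (d1, d2), by the multiplication principle
two_dice = list(product(range(1, 7), repeat=2))
print(len(two_dice))  # 36

# Drawing both balls without replacement: the two possible orders
without_repl = ["".join(p) for p in permutations("RB", 2)]
print(without_repl)   # ['RB', 'BR']

# Drawing with replacement: the first ball is returned, so repeats can occur
with_repl = ["".join(p) for p in product("RB", repeat=2)]
print(with_repl)      # ['RR', 'RB', 'BR', 'BB']
```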

Examples of Sample Spaces

Example 1. Describe the sample space for the experiment of tossing a single fair coin.

Answer:

When a single coin is tossed, there are only two possible outcomes: either the coin shows a Head (H) or a Tail (T).

Therefore, the sample space is the set of these two outcomes.

Sample Space, $S = \{H, T\}$

The number of sample points is $n(S) = 2$.


Example 2. What is the sample space when a standard six-sided die is rolled once?

Answer:

A standard die has six faces marked with the numbers 1, 2, 3, 4, 5, and 6. When the die is rolled, any one of these numbers can appear on the top face.

The set of all possible outcomes is the sample space.

Sample Space, $S = \{1, 2, 3, 4, 5, 6\}$

The number of sample points is $n(S) = 6$.


Example 3. Find the sample space associated with the experiment of tossing two coins simultaneously (or one coin twice).

Answer:

Let H denote a Head and T denote a Tail. When two coins are tossed, we need to consider the outcomes on both coins. The possible outcomes are:

  • Head on the first coin and Head on the second coin (HH).
  • Head on the first coin and Tail on the second coin (HT).
  • Tail on the first coin and Head on the second coin (TH).
  • Tail on the first coin and Tail on the second coin (TT).

The sample space is the set of all these possible ordered pairs.

Sample Space, $S = \{HH, HT, TH, TT\}$

The number of sample points is $n(S) = 4$.


Example 4. A coin is tossed. If it shows a head, a die is thrown. If it shows a tail, the experiment is stopped. Describe the sample space for this experiment.

Answer:

We analyze the two possible cases for the coin toss:

Case 1: The coin shows a Tail (T).

According to the problem, the experiment stops. So, T is one possible outcome.

Case 2: The coin shows a Head (H).

The experiment continues, and a die is thrown. The possible outcomes for the die are {1, 2, 3, 4, 5, 6}. This gives us six possible combined outcomes: (H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6).

Combining all possible outcomes from both cases gives the sample space.

Sample Space, $S = \{T, (H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6)\}$

The number of sample points is $n(S) = 7$.
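The two-case construction in this example translates directly into code; a small sketch, assuming we represent the stopping outcome by the string "T" and the continued outcomes by (coin, die) pairs:

```python
# Sample space of the experiment: toss a coin; on a head roll a die, on a tail stop.
sample_space = ["T"] + [("H", d) for d in range(1, 7)]
print(sample_space)
print(len(sample_space))  # 7 sample points
```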


Example 5. Write the sample space for the random experiment of rolling a pair of dice.

Answer:

When a pair of dice is rolled, we can consider them as distinguishable (e.g., a red die and a blue die). The outcome of the experiment is an ordered pair $(d_1, d_2)$, where $d_1$ is the number on the first die and $d_2$ is the number on the second die. Each die can show a number from 1 to 6.

The sample space is the set of all possible ordered pairs:

$S = \{$

$(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),$

$(2,1), (2,2), (2,3), (2,4), (2,5), (2,6),$

$(3,1), (3,2), (3,3), (3,4), (3,5), (3,6),$

$(4,1), (4,2), (4,3), (4,4), (4,5), (4,6),$

$(5,1), (5,2), (5,3), (5,4), (5,5), (5,6),$

$(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)$

$\}$

The total number of possible outcomes (sample points) can be found using the multiplication principle: $6$ outcomes for the first die $\times$ $6$ outcomes for the second die.

The number of sample points is $n(S) = 6 \times 6 = 36$.


The sample space can be finite (having a finite number of outcomes) or infinite. In introductory probability, we primarily focus on experiments with finite sample spaces. Examples of experiments with infinite sample spaces include counting the number of coin tosses until a head appears (H, TH, TTH, ...) which is countably infinite, or measuring a continuous quantity like time or temperature which can result in uncountably infinite outcomes.



Events

In probability theory, once we have defined a random experiment and its sample space (the set of all possible outcomes), the next crucial concept is that of an event. If the sample space is the "universe" of all possibilities for an experiment, an event is a specific region or a point of interest within that universe. It is the language we use to describe the specific results we want to analyze and calculate probabilities for.


Definition of an Event

An Event is defined as any subset of the sample space S. This simple definition is very powerful. It means that an event is simply a collection of one or more possible outcomes. If the result of our experiment is one of the outcomes included in the event's set, we say that the event has occurred.

Example: Consider the experiment of rolling a single die. The sample space is the set of all possible outcomes, $S = \{1, 2, 3, 4, 5, 6\}$. Any subset of $S$ is an event: for instance, 'getting an even number' is the event $\{2, 4, 6\}$, and 'getting a 6' is the event $\{6\}$.


Types of Events

Events can be categorized based on the number and nature of outcomes they contain. This classification helps us understand the structure of what we are measuring.

  • Impossible Event: the empty set $\phi$; it contains no outcomes, so it can never occur.
  • Sure Event: the whole sample space $S$; it is certain to occur, since every outcome belongs to it.
  • Simple (or Elementary) Event: an event containing exactly one sample point, such as $\{5\}$ for a die roll.
  • Compound Event: an event containing more than one sample point, such as 'getting an even number', $\{2, 4, 6\}$.


Exhaustive and Favourable Number of Cases

These two terms are fundamental to calculating probability in the classical approach.

1. Exhaustive Number of Cases

The total number of all possible outcomes of a random experiment is called the exhaustive number of cases. It is simply the cardinality (or the total number of elements) of the sample space S. It represents every single thing that can possibly happen.

It is denoted by $n(S)$.

Example: In the experiment of rolling a single die, the sample space is $S = \{1, 2, 3, 4, 5, 6\}$. The exhaustive number of cases is $n(S) = 6$.

2. Favourable Number of Cases

The number of outcomes of a random experiment that result in the happening of a particular event is called the favourable number of cases for that event. It is the cardinality (or the total number of elements) of the event set E.

It is denoted by $n(E)$.

Example: Continuing with the die roll experiment, let E be the event 'getting an even number'. The outcomes favourable to E are {2, 4, 6}. Therefore, the set for event E is $E = \{2, 4, 6\}$, and the favourable number of cases is $n(E) = 3$.

These two concepts form the basis of the classical probability formula: $P(E) = \frac{\text{Favourable Number of Cases}}{\text{Exhaustive Number of Cases}} = \frac{n(E)}{n(S)}$.
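The formula can be evaluated exactly with Python's `fractions` module; a sketch for the die example above:

```python
from fractions import Fraction

# Classical probability: P(E) = n(E) / n(S) for equally likely outcomes
S = {1, 2, 3, 4, 5, 6}
E = {x for x in S if x % 2 == 0}   # 'getting an even number' -> {2, 4, 6}
P_E = Fraction(len(E), len(S))
print(P_E)  # 1/2
```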


The Algebra of Events

Because events are sets, we can use the operations of set theory to combine or modify them, creating new events. This "algebra of events" allows us to precisely describe complex scenarios.

Let A and B be two events associated with a sample space S.

1. The 'OR' Event (Union: $A \cup B$)

The event 'A or B' (also written as 'A or B or both' or 'at least one of A or B') represents the set of all outcomes that are either in A, in B, or in both. This corresponds to the union of sets.

$A \cup B = \{x : x \in A \text{ or } x \in B\}$

Venn diagram showing the union of two events A and B. The entire area of both circles is shaded.

2. The 'AND' Event (Intersection: $A \cap B$)

The event 'A and B' represents the set of all outcomes that are common to both A and B, meaning they are in A and also in B. This corresponds to the intersection of sets.

$A \cap B = \{x : x \in A \text{ and } x \in B\}$

Venn diagram showing the intersection of two events A and B. The overlapping area of the two circles is shaded.

3. The 'NOT' Event (Complement: $A'$ or $A^c$)

The event 'not A' represents the set of all outcomes in the sample space S that are not in event A. This corresponds to the complement of a set.

$A' = S - A = \{x : x \in S \text{ and } x \notin A\}$

Venn diagram showing the complement of an event A. The area inside the sample space S but outside circle A is shaded.

4. The Event 'A but not B' (Difference: $A - B$)

The event 'A but not B' represents the set of all outcomes that are in event A but are not in event B. This corresponds to the difference of sets. It can also be written as $A \cap B'$.

$A - B = \{x : x \in A \text{ and } x \notin B\}$

Venn diagram showing the difference of events A - B. The area of circle A that does not overlap with circle B is shaded.
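Because events are plain sets, the four operations above map directly onto Python's set operators; a sketch using the die events 'prime' and 'odd':

```python
S = {1, 2, 3, 4, 5, 6}   # sample space for one die roll
A = {2, 3, 5}            # 'getting a prime number'
B = {1, 3, 5}            # 'getting an odd number'

print(A | B)   # union, 'A or B'           -> {1, 2, 3, 5}
print(A & B)   # intersection, 'A and B'   -> {3, 5}
print(S - A)   # complement, 'not A'       -> {1, 4, 6}
print(A - B)   # difference, 'A but not B' -> {2}
```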

Relationships Between Events

Mutually Exclusive Events

Two or more events are mutually exclusive (or disjoint) if they cannot happen at the same time. This means they have no outcomes in common. In the language of sets, their intersection is the empty set.

$A \cap B = \phi$

Venn diagram showing two mutually exclusive events, A and B, as two non-overlapping circles inside the sample space S.

Example: When rolling a die, the event "getting an even number" ($A = \{2, 4, 6\}$) and the event "getting an odd number" ($B = \{1, 3, 5\}$) are mutually exclusive. It is impossible for a single roll to be both even and odd, so $A \cap B = \phi$.

Exhaustive Events

A collection of events ($E_1, E_2, \dots, E_k$) is exhaustive if their union forms the entire sample space S. This means that in any trial of the experiment, at least one of these events is guaranteed to occur. They "exhaust" all possibilities.

$E_1 \cup E_2 \cup \dots \cup E_k = S$

Example: For a die roll, the events "getting a number less than 4" ($A = \{1, 2, 3\}$) and "getting a number greater than 2" ($B = \{3, 4, 5, 6\}$) are exhaustive because their union $A \cup B = \{1, 2, 3, 4, 5, 6\} = S$. Note that these events are not mutually exclusive because they share the outcome '3'.

Mutually Exclusive and Exhaustive Events

A collection of events is both mutually exclusive and exhaustive if they do not overlap and together they cover the entire sample space. They form a partition of the sample space.

Example: For a die roll, the events "getting an even number" ($A = \{2, 4, 6\}$) and "getting an odd number" ($B = \{1, 3, 5\}$) are both mutually exclusive ($A \cap B = \phi$) and exhaustive ($A \cup B = S$).


Example 1. A single die is rolled. Let A be the event 'getting a prime number', B be the event 'getting an odd number', and C be the event 'getting a number greater than 3'. Describe the events:

(i) $A \text{ and } B$      (ii) $A \text{ or } B$      (iii) $B \text{ and } C$      (iv) not C ($C'$)      (v) $A - C$

Answer:

Given:

Experiment: Rolling a single die. The sample space is $S = \{1, 2, 3, 4, 5, 6\}$.

Event A (prime number): $A = \{2, 3, 5\}$

Event B (odd number): $B = \{1, 3, 5\}$

Event C (number > 3): $C = \{4, 5, 6\}$

Solution:

(i) $A \text{ and } B$ ($A \cap B$): We need outcomes that are in both A and B (prime and odd).

$A \cap B = \{2, 3, 5\} \cap \{1, 3, 5\} = \{3, 5\}$

(ii) $A \text{ or } B$ ($A \cup B$): We need outcomes that are in A or B or both (prime or odd).

$A \cup B = \{2, 3, 5\} \cup \{1, 3, 5\} = \{1, 2, 3, 5\}$

(iii) $B \text{ and } C$ ($B \cap C$): We need outcomes that are in both B and C (odd and greater than 3).

$B \cap C = \{1, 3, 5\} \cap \{4, 5, 6\} = \{5\}$

(iv) not C ($C'$): We need all outcomes in S that are not in C (not greater than 3, i.e., less than or equal to 3).

$C' = S - C = \{1, 2, 3, 4, 5, 6\} - \{4, 5, 6\} = \{1, 2, 3\}$

(v) $A - C$: We need outcomes that are in A but not in C (prime but not greater than 3).

$A - C = \{2, 3, 5\} - \{4, 5, 6\} = \{2, 3\}$


Example 2. Two dice are rolled. Let A be the event "the sum of the numbers is 7", and B be the event "the numbers shown are equal". Are A and B mutually exclusive?

Answer:

Given:

Experiment: Rolling two dice. The sample space S has 36 outcomes, from (1,1) to (6,6).

Event A: "the sum is 7".

Event B: "the numbers are equal" (a doublet).

Solution:

Step 1: List the outcomes for each event.

For event A (sum is 7), the possible outcomes are:

$A = \{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$

For event B (numbers are equal), the possible outcomes are:

$B = \{(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)\}$

Step 2: Find the intersection of A and B.

To check if the events are mutually exclusive, we need to find their intersection, $A \cap B$. We look for outcomes that are common to both sets.

By comparing the sets A and B, we can see that there are no common outcomes.

$A \cap B = \phi$

Conclusion:

Since the intersection of events A and B is the empty set, they have no outcomes in common. Therefore, events A and B are mutually exclusive. It is impossible to roll two dice where the numbers are the same and their sum is 7.
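The check in this example can also be carried out by brute-force enumeration; a sketch:

```python
from itertools import product

S = set(product(range(1, 7), repeat=2))   # all 36 ordered pairs
A = {p for p in S if sum(p) == 7}         # 'the sum is 7'
B = {p for p in S if p[0] == p[1]}        # 'a doublet'

print(len(A), len(B))    # 6 6
print(A & B == set())    # True: A and B are mutually exclusive
```

A doublet $(d, d)$ has the even sum $2d$, so it can never equal 7; the empty intersection confirms this.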



Axiomatic Approach to Probability

Early approaches to probability, such as the classical definition ($P(E) = n(E)/n(S)$), were intuitive but had limitations. The classical approach, for example, only works for experiments with a finite number of equally likely outcomes. To create a universally applicable and mathematically consistent theory of probability, the Russian mathematician Andrey Kolmogorov developed the axiomatic approach in the 1930s. This approach defines probability not by a specific formula but by a set of fundamental rules, or axioms, that any valid assignment of probabilities must satisfy. This foundation is so robust that it works for any type of random experiment, whether the outcomes are equally likely or not, and whether the sample space is finite or infinite.


The Three Axioms of Probability

The axiomatic approach begins with a random experiment having a sample space, $S$. Probability is then defined as a function, $P$, that assigns a real number, $P(E)$, to every event $E$ (a subset of $S$). For this function to be a valid probability measure, it must satisfy the following three axioms:

Axiom 1: Non-negativity

The probability of any event must be a non-negative number.

For any event E, $P(E) \ge 0$.

This axiom formalizes our intuition that the chance of something happening cannot be negative. It can be zero (for an impossible event) or positive.

Axiom 2: Normalization

The probability of the entire sample space $S$ (the "sure event," since one of the outcomes in S must occur) is exactly 1.

$P(S) = 1$

This axiom normalizes probability, setting the scale from 0 to 1. A value of 1 represents absolute certainty. The sum of the probabilities of all possible elementary outcomes must equal 1.

Axiom 3: Additivity

If $E_1, E_2, E_3, \dots$ is a sequence of mutually exclusive events (meaning they have no outcomes in common, i.e., $E_i \cap E_j = \emptyset$ for $i \neq j$), then the probability of their union is the sum of their individual probabilities.

$P(E_1 \cup E_2 \cup E_3 \cup \dots) = P(E_1) + P(E_2) + P(E_3) + \dots$

For any two mutually exclusive events, this simplifies to:

If $E_1 \cap E_2 = \emptyset$, then $P(E_1 \cup E_2) = P(E_1) + P(E_2)$.

This axiom is the cornerstone that allows us to calculate the probability of a compound event by breaking it down into simpler, disjoint parts.

Any function $P$ that satisfies these three axioms is considered a valid probability measure, and all other rules of probability can be logically derived from them.
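For a finite sample space, the three axioms can be verified exhaustively. A sketch, using a hypothetical biased three-outcome experiment to emphasize that the outcomes need not be equally likely:

```python
from fractions import Fraction
from itertools import chain, combinations

# A candidate probability: assign a point mass to each outcome, and let
# P(E) be the sum of the masses of the outcomes in E.
mass = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}
S = set(mass)

def P(event):
    return sum((mass[x] for x in event), Fraction(0))

# Enumerate every event, i.e. every subset of S.
events = [set(e) for e in chain.from_iterable(
    combinations(S, r) for r in range(len(S) + 1))]

assert all(P(E) >= 0 for E in events)        # Axiom 1: non-negativity
assert P(S) == 1                             # Axiom 2: normalization
assert all(P(E | F) == P(E) + P(F)           # Axiom 3: additivity for
           for E in events for F in events   # mutually exclusive events
           if not (E & F))                   # (| is set union here)
print("all three axioms hold")
```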


Important Properties Derived from the Axioms

Using only the three axioms, we can prove several fundamental theorems of probability:

  • Probability of the impossible event: $P(\emptyset) = 0$.
  • Probability of the complementary event: $P(E') = 1 - P(E)$.
  • Range of probability: for any event $E$, $0 \le P(E) \le 1$.


Probability for Equally Likely Outcomes

In many simple experiments, such as tossing a fair coin or rolling a fair die, it is reasonable to assume that every individual outcome in the sample space is equally likely. In this specific but common case, the axioms lead to the classical definition of probability.

For a finite sample space $S$ with $n(S)$ equally likely outcomes, the probability of any event $E$ is the ratio of the number of outcomes favourable to $E$ to the total number of outcomes in the sample space.

$P(E) = \frac{\text{Number of Favourable Outcomes}}{\text{Total Number of Possible Outcomes}} = \frac{n(E)}{n(S)}$

... (i)


Odds in Favour and Odds Against an Event

Sometimes, the likelihood of an event is expressed not as a probability, but as "odds". This is common in betting and gaming.

Odds in Favour

The odds in favour of an event E is the ratio of the number of outcomes favourable to E to the number of outcomes unfavourable to E.

Odds in Favour of E = $\frac{\text{Favourable Outcomes}}{\text{Unfavourable Outcomes}} = \frac{n(E)}{n(E')} = \frac{n(E)}{n(S) - n(E)}$

If $P(E)$ is the probability of the event, then the odds in favour are $\frac{P(E)}{1 - P(E)}$.

Odds Against

The odds against an event E is the ratio of the number of outcomes unfavourable to E to the number of outcomes favourable to E.

Odds Against E = $\frac{\text{Unfavourable Outcomes}}{\text{Favourable Outcomes}} = \frac{n(E')}{n(E)} = \frac{n(S) - n(E)}{n(E)}$

If $P(E)$ is the probability of the event, then the odds against are $\frac{1 - P(E)}{P(E)}$.
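Both conversions can be written as one-line helpers; a sketch (the function names are illustrative):

```python
from fractions import Fraction

def odds_in_favour(p):
    """Odds in favour, P(E) : (1 - P(E)); assumes 0 < p < 1."""
    return Fraction(p) / (1 - Fraction(p))

def odds_against(p):
    """Odds against, (1 - P(E)) : P(E)."""
    return 1 / odds_in_favour(p)

p_king = Fraction(4, 52)        # probability of drawing a king
print(odds_in_favour(p_king))   # 1/12, i.e. "1 to 12"
print(odds_against(p_king))     # 12
```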

Example 1. A card is drawn from a well-shuffled deck of 52 cards. Find the odds in favour of drawing a king.

Answer:

Total number of cards, $n(S) = 52$.

Let E be the event 'drawing a king'.

Number of kings in a deck (favourable outcomes), $n(E) = 4$.

Number of non-kings (unfavourable outcomes), $n(E') = 52 - 4 = 48$.

Odds in favour of drawing a king = $\frac{n(E)}{n(E')} = \frac{4}{48} = \frac{1}{12}$.

This is often stated as "1 to 12".


Example 2. A fair die is rolled. What is the probability of getting a prime number?

Answer:

Step 1: Define the Sample Space.

The set of all possible outcomes is $S = \{1, 2, 3, 4, 5, 6\}$.

The total number of possible outcomes is $n(S) = 6$.

Step 2: Define the Event.

Let E be the event "getting a prime number". The prime numbers on a die are 2, 3, and 5. (Note: 1 is not a prime number).

$E = \{2, 3, 5\}$.

The number of favourable outcomes is $n(E) = 3$.

Step 3: Calculate the Probability.

Since the die is fair, all outcomes are equally likely. We use the formula $P(E) = \frac{n(E)}{n(S)}$.

$P(E) = \frac{3}{6} = \frac{1}{2}$

The probability of getting a prime number is $\frac{1}{2}$.


Example 3. Two fair coins are tossed. What is the probability of getting at least one head?

Answer:

Solution (Method 1: Direct Counting):

Step 1: Define the Sample Space.

$S = \{HH, HT, TH, TT\}$. The total number of outcomes is $n(S) = 4$.

Step 2: Define the Event.

Let E be the event "getting at least one head". This means one head or two heads. The favourable outcomes are:

$E = \{HH, HT, TH\}$.

The number of favourable outcomes is $n(E) = 3$.

Step 3: Calculate the Probability.

$P(E) = \frac{n(E)}{n(S)} = \frac{3}{4}$

Solution (Method 2: Using the Complement):

Sometimes it is easier to calculate the probability of the event *not* happening and subtract from 1.

Step 1: Define the Complementary Event.

Let E be the event "getting at least one head".

The complement, E', is the event "getting no heads".

The only outcome in the sample space with no heads is TT. So, $E' = \{TT\}$.

The number of outcomes for the complement is $n(E') = 1$.

Step 2: Calculate the Probability of the Complement.

$P(E') = \frac{n(E')}{n(S)} = \frac{1}{4}$

Step 3: Calculate the Probability of the Original Event.

Using the rule $P(E) = 1 - P(E')$.

$P(E) = 1 - \frac{1}{4} = \frac{3}{4}$

Both methods give the same answer. The probability of getting at least one head is $\frac{3}{4}$.
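The two methods can be checked side by side; a sketch:

```python
from fractions import Fraction
from itertools import product

S = ["".join(p) for p in product("HT", repeat=2)]   # ['HH','HT','TH','TT']

# Method 1: count the favourable outcomes directly.
direct = Fraction(sum("H" in s for s in S), len(S))

# Method 2: one minus the probability of the complement 'no heads'.
via_complement = 1 - Fraction(sum("H" not in s for s in S), len(S))

print(direct, via_complement)   # 3/4 3/4
```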


Example 4. A bag contains 4 red balls and 6 black balls. Two balls are drawn at random. What is the probability that both balls are red?

Answer:

Solution:

This problem involves combinations, as the order in which the balls are drawn does not matter.

Step 1: Calculate the Total Number of Possible Outcomes, $n(S)$.

There are a total of $4+6=10$ balls in the bag. We are choosing 2 of them.

$n(S) = \text{Number of ways to choose 2 balls from 10} = \binom{10}{2}$

$\binom{10}{2} = \frac{10!}{2!(10-2)!} = \frac{10 \times 9}{2 \times 1} = 45$.

So, there are 45 possible pairs of balls that can be drawn.

Step 2: Calculate the Number of Favourable Outcomes, $n(E)$.

Let E be the event "drawing 2 red balls". We need to choose 2 red balls from the 4 available red balls.

$n(E) = \text{Number of ways to choose 2 red balls from 4} = \binom{4}{2}$

$\binom{4}{2} = \frac{4!}{2!(4-2)!} = \frac{4 \times 3}{2 \times 1} = 6$.

So, there are 6 ways to draw two red balls.

Step 3: Calculate the Probability.

$P(E) = \frac{n(E)}{n(S)} = \frac{6}{45} = \frac{2}{15}$.

The probability of drawing two red balls is $\frac{2}{15}$.
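The counting steps above can be done with `math.comb`; a sketch:

```python
from fractions import Fraction
from math import comb

n_total = comb(10, 2)    # ways to choose any 2 of the 10 balls -> 45
n_red = comb(4, 2)       # ways to choose 2 of the 4 red balls  -> 6
P_both_red = Fraction(n_red, n_total)
print(P_both_red)        # 2/15
```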



Laws of Probability

The three axioms of probability provide the fundamental foundation of the theory. From these axioms, we can derive several powerful laws that allow us to calculate the probabilities of more complex, combined events. These laws, including the Addition and Multiplication Rules, are the primary tools used in solving probability problems.


Important Properties and Laws Derived from the Axioms

Using only the three axioms, we can prove several fundamental theorems of probability.

Law for Subsets

If event A is a subset of event B (meaning every outcome in A is also in B), then the probability of A is less than or equal to the probability of B.

If $A \subseteq B$, then $P(A) \le P(B)$

... (i)

Intuitive Explanation: Every outcome in A is also an outcome in B, so whenever A occurs, B necessarily occurs as well. Event B is therefore at least as likely as event A.

A Venn Diagram showing event A as a circle completely inside a larger circle representing event B.

Derivation

Since $A \subseteq B$, we can express event B as the union of two mutually exclusive events: A and the part of B that is not in A (which is $B - A$).

$B = A \cup (B - A)$

Because A and $(B-A)$ are mutually exclusive, we can apply Axiom 3:

$P(B) = P(A) + P(B - A)$

By Axiom 1 (Non-negativity), we know that the probability of any event must be greater than or equal to zero. Therefore, $P(B - A) \ge 0$.

This implies that $P(B) \ge P(A)$, or $P(A) \le P(B)$.


The Addition Rule of Probability

The Addition Rule (also known as the Sum Rule) is used to find the probability of a union of events. It answers the question: "What is the probability that at least one of a set of events will occur?"

1. Addition Rule for Any Two Events

For any two events A and B in a sample space S, the probability that either event A or event B (or both) will occur is given by the formula:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

... (ii)

Derivation and Intuition

Imagine calculating the probability of the event $A \cup B$. From set theory, we know that we can partition the union into three mutually exclusive parts: $A - B$, $B - A$, and $A \cap B$.

A Venn Diagram showing two overlapping circles, A and B, and their intersection. The union A U B is the sum of the areas of A and B, minus the intersection to avoid double counting.

So, $A \cup B = (A - B) \cup (B - A) \cup (A \cap B)$. Since these three events are mutually exclusive, by Axiom 3:

$P(A \cup B) = P(A - B) + P(B - A) + P(A \cap B)$

We also know that $A = (A - B) \cup (A \cap B)$. These are mutually exclusive, so $P(A) = P(A - B) + P(A \cap B)$, which gives $P(A - B) = P(A) - P(A \cap B)$.

Similarly, $P(B - A) = P(B) - P(A \cap B)$.

Substituting these back into our equation for $P(A \cup B)$:

$P(A \cup B) = (P(A) - P(A \cap B)) + (P(B) - P(A \cap B)) + P(A \cap B)$

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

The intuition is simple: when we add $P(A)$ and $P(B)$, we have counted the probability of the intersection, $P(A \cap B)$, twice. Therefore, we must subtract it once to get the correct total probability.
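The rule is easy to verify by direct counting for equally likely outcomes; a sketch using the die events 'prime' and 'odd':

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A, B = {2, 3, 5}, {1, 3, 5}      # 'prime' and 'odd'

def P(E):
    return Fraction(len(E), len(S))

lhs = P(A | B)                   # P(A or B)
rhs = P(A) + P(B) - P(A & B)     # addition rule
print(lhs == rhs, lhs)           # True 2/3
```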

2. Addition Rule for Mutually Exclusive Events

When two events, A and B, are mutually exclusive, they cannot happen at the same time. This means their intersection is the empty set ($A \cap B = \emptyset$), and consequently, the probability of their intersection is zero, $P(A \cap B) = 0$. In this special case, the Addition Rule simplifies to:

$P(A \cup B) = P(A) + P(B)$ (for mutually exclusive events)

... (iii)

This is a direct application of the third axiom of probability.

3. Addition Rule for Three Events

The principle can be extended to three events A, B, and C. This is an application of the Principle of Inclusion-Exclusion.

$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(A \cap C) + P(A \cap B \cap C)$

... (iv)

A Venn Diagram showing three overlapping circles A, B, and C, illustrating the Principle of Inclusion-Exclusion. The formula adds the three main circles, subtracts the three pairwise overlaps, and adds back the central overlap.

Intuitive Explanation: To find the total probability (area) covered by the three events:

  1. We first add the probabilities of the three individual events: $P(A) + P(B) + P(C)$.
  2. In doing so, we have double-counted the regions where two events overlap ($A \cap B$, $B \cap C$, and $A \cap C$). So, we must subtract their probabilities once.
  3. However, when we subtracted the three pairwise overlaps, we subtracted the central region where all three events overlap ($A \cap B \cap C$) three times. Since it was initially added three times (once with each event), it has now been completely removed. We must add it back once to get the correct total.
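The three-event formula can likewise be verified by counting. The following sketch uses three deliberately overlapping die events of my own choosing, not from the text:

```python
from fractions import Fraction

# One roll of a fair die; three overlapping events
S = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {2, 3, 4}
C = {3, 4, 5}

def prob(event):
    return Fraction(len(event), len(S))

lhs = prob(A | B | C)
# Inclusion-Exclusion: add singles, subtract pairs, add back the triple
rhs = (prob(A) + prob(B) + prob(C)
       - prob(A & B) - prob(B & C) - prob(A & C)
       + prob(A & B & C))
assert lhs == rhs == Fraction(5, 6)
```

Both sides equal $\frac{5}{6}$, since $A \cup B \cup C = \{1, 2, 3, 4, 5\}$.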

Conditional Probability

Often, the probability of an event changes if we know that another event has already happened. Conditional probability is the probability of an event occurring, given that another event is already known to have occurred.

The conditional probability of event E happening, given that event F has already happened, is written as $P(E|F)$.

Intuitive Explanation: When we know that event F has occurred, our world of possibilities shrinks. The original sample space $S$ is no longer relevant; our new, reduced sample space is now just the event F. The favorable outcomes for E are now only those outcomes that are also inside this new sample space F, which is the intersection $E \cap F$.

A Venn Diagram illustrating conditional probability. The original sample space S is shown, but after knowing F occurred, the sample space reduces to just the circle F. The favorable outcomes are the intersection E and F.

The formula is:

$P(E|F) = \frac{P(E \cap F)}{P(F)}$, provided $P(F) > 0$

... (v)
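The formula and the reduced-sample-space intuition give the same answer, as a small counting sketch shows (the die events here are illustrative, not from the text):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
E = {4, 5, 6}   # event: a number greater than 3
F = {2, 4, 6}   # event: an even number (known to have occurred)

def prob(event):
    return Fraction(len(event), len(S))

# Definition (v): P(E|F) = P(E ∩ F) / P(F)
p_E_given_F = prob(E & F) / prob(F)

# Reduced-sample-space view: count favourable outcomes within F alone
assert p_E_given_F == Fraction(len(E & F), len(F)) == Fraction(2, 3)
```

With $E \cap F = \{4, 6\}$ and the reduced sample space $F$ holding 3 outcomes, both computations give $\frac{2}{3}$.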


The Multiplication Rule of Probability

The Multiplication Rule is used to find the probability of an intersection of events. It answers the question: "What is the probability that both event E and event F will occur?" This rule is derived directly by rearranging the formula for conditional probability.

$P(E \cap F) = P(E) \cdot P(F|E)$

... (vi)

In words: The probability of both E and F happening is the probability of E happening, multiplied by the probability of F happening *given that E has already happened*.

Multiplication Rule for Independent Events

Two events are independent if the occurrence of one does not affect the probability of the other. For example, tossing a coin and rolling a die are independent events. If E and F are independent, then knowing F has occurred doesn't change the probability of E, so $P(E|F) = P(E)$.

In this special case, the Multiplication Rule simplifies to:

$P(E \cap F) = P(E) \cdot P(F)$ (for independent events)

... (vii)

Crucial Distinction: Do not confuse "mutually exclusive" with "independent." For events with non-zero probabilities, they are nearly opposite concepts: if A and B are mutually exclusive, the occurrence of one rules out the other, so $P(A|B) = 0 \neq P(A)$, which means the events are dependent.
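Independence can be confirmed by counting on the product sample space of the coin-and-die experiment mentioned above (the set construction here is an illustrative sketch):

```python
from fractions import Fraction
from itertools import product

# Product sample space: every (coin face, die face) pair is equally likely
S = set(product("HT", range(1, 7)))      # 2 × 6 = 12 outcomes
E = {s for s in S if s[0] == "H"}        # head on the coin
F = {s for s in S if s[1] == 6}          # six on the die

def prob(event):
    return Fraction(len(event), len(S))

# Independence: P(E ∩ F) = P(E) · P(F), equivalently P(E|F) = P(E)
assert prob(E & F) == prob(E) * prob(F) == Fraction(1, 12)
assert prob(E & F) / prob(F) == prob(E)
```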


Law of Total Probability

This law allows us to find the probability of an event A by considering a set of mutually exclusive and exhaustive events that partition the sample space. Let $E_1, E_2, \dots, E_n$ be events that form a partition of the sample space S (i.e., they are mutually exclusive, their union is S, and $P(E_i) > 0$ for each $i$). Then the probability of any event A can be expressed as:

$P(A) = \sum\limits_{i=1}^{n} P(A \cap E_i) = \sum\limits_{i=1}^{n} P(A|E_i)\,P(E_i)$

... (viii)

A Venn Diagram showing the sample space S partitioned into mutually exclusive and exhaustive events E1, E2, E3. An event A overlaps with all three partitions. The total probability of A is the sum of the probabilities of its intersections with each partition.
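A small counting sketch illustrates the law; the partition of a die roll below is my own illustrative choice, not from the text:

```python
from fractions import Fraction

# A die roll partitioned into three mutually exclusive, exhaustive events
S = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3, 4}, {5, 6}]   # plays the role of E1, E2, E3
A = {2, 3, 5}

def prob(event):
    return Fraction(len(event), len(S))

# P(A) = Σ P(A|Ei) · P(Ei), summing over the partition
total = sum(prob(A & Ei) / prob(Ei) * prob(Ei) for Ei in partition)
assert total == prob(A) == Fraction(1, 2)
```

Each term $P(A|E_i)\,P(E_i)$ recovers $P(A \cap E_i)$, so the sum reassembles $P(A)$ piece by piece.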

Example 1. A card is drawn from a well-shuffled deck of 52 playing cards. What is the probability that the card is either a King or a Spade?

Answer:

Step 1: Define the events and their individual probabilities.

Sample space, $n(S) = 52$.

Let K be the event "the card is a King". There are 4 Kings. $P(K) = \frac{4}{52}$.

Let Sp be the event "the card is a Spade". There are 13 Spades. $P(Sp) = \frac{13}{52}$.

Step 2: Determine if the events are mutually exclusive.

The events are NOT mutually exclusive because it is possible to draw a card that is both a King and a Spade (the King of Spades). We must use the general Addition Rule: $P(K \cup Sp) = P(K) + P(Sp) - P(K \cap Sp)$.

Step 3: Find the probability of the intersection.

The event $K \cap Sp$ is "the card is the King of Spades". There is 1 such card. $P(K \cap Sp) = \frac{1}{52}$.

Step 4: Apply the Addition Rule formula.

$P(K \cup Sp) = \frac{4}{52} + \frac{13}{52} - \frac{1}{52} = \frac{16}{52} = \frac{4}{13}$.

The probability of drawing a King or a Spade is $\frac{4}{13}$.
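The arithmetic above can be confirmed with exact fractions:

```python
from fractions import Fraction

p_king = Fraction(4, 52)             # 4 Kings in the deck
p_spade = Fraction(13, 52)           # 13 Spades in the deck
p_king_of_spades = Fraction(1, 52)   # the single card in both events

# General Addition Rule: P(K ∪ Sp) = P(K) + P(Sp) - P(K ∩ Sp)
p_union = p_king + p_spade - p_king_of_spades
assert p_union == Fraction(4, 13)
```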


Example 2. A bag contains 5 red and 3 black balls. Two balls are drawn one after the other without replacement. What is the probability that the first ball is red and the second ball is black?

Answer:

Step 1: Define the events.

Let $R_1$ be the event "the first ball is red".

Let $B_2$ be the event "the second ball is black".

We want to find $P(R_1 \cap B_2)$.

Step 2: Use the Multiplication Rule for Dependent Events.

Since the balls are drawn without replacement, the events are dependent. We use: $P(R_1 \cap B_2) = P(R_1) \cdot P(B_2|R_1)$.

Step 3: Calculate the individual probabilities.

Initially, there are 8 balls total, with 5 red. $P(R_1) = \frac{5}{8}$.

Given that the first ball was red, there are now 7 balls left, with 3 black. $P(B_2|R_1) = \frac{3}{7}$.

Step 4: Apply the rule.

$P(R_1 \cap B_2) = \frac{5}{8} \times \frac{3}{7} = \frac{15}{56}$.

The probability of drawing a red then a black ball is $\frac{15}{56}$.
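The multiplication-rule answer can be cross-checked by enumerating every ordered draw of two distinct balls:

```python
from fractions import Fraction
from itertools import permutations

# Multiplication Rule with exact fractions
p_r1 = Fraction(5, 8)            # 5 red balls out of 8
p_b2_given_r1 = Fraction(3, 7)   # 3 black balls among the 7 remaining
p_both = p_r1 * p_b2_given_r1

# Cross-check: enumerate all ordered draws of two distinct balls
balls = ["R"] * 5 + ["B"] * 3
draws = list(permutations(range(8), 2))   # 8 × 7 = 56 ordered pairs
favourable = [(i, j) for i, j in draws
              if balls[i] == "R" and balls[j] == "B"]
assert Fraction(len(favourable), len(draws)) == p_both == Fraction(15, 56)
```

The enumeration finds $5 \times 3 = 15$ favourable pairs out of $8 \times 7 = 56$, matching $\frac{15}{56}$.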


Example 3. A fair coin is tossed and a fair six-sided die is rolled. What is the probability of getting a Head on the coin and a 6 on the die?

Answer:

Step 1: Define the events and check for independence.

Let H be the event "getting a Head". $P(H) = \frac{1}{2}$.

Let X be the event "getting a 6". $P(X) = \frac{1}{6}$. (We avoid the letter S for this event, since S already denotes the sample space.)

The result of the coin toss does not affect the die roll, so the events are independent.

Step 2: Apply the Multiplication Rule for Independent Events.

$P(H \cap X) = P(H) \cdot P(X) = \frac{1}{2} \times \frac{1}{6} = \frac{1}{12}$.

The probability of getting a Head and a 6 is $\frac{1}{12}$.


Example 4. In a class, 30% of students failed in Physics, 25% failed in Mathematics, and 15% failed in both Physics and Mathematics. A student is selected at random. What is the probability that the student failed in Physics or Mathematics?

Answer:

Step 1: Define the events from the given probabilities.

Let A be the event "the student failed in Physics". $P(A) = 0.30$. (We avoid the letter P for this event, since P already denotes the probability function.)

Let B be the event "the student failed in Mathematics". $P(B) = 0.25$.

The event "failed in both" is the intersection. $P(A \cap B) = 0.15$.

Step 2: Identify the required probability.

We need to find the probability that the student failed in "Physics or Mathematics", which is the union of the two events, $P(A \cup B)$.

Step 3: Apply the Addition Rule.

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

$P(A \cup B) = 0.30 + 0.25 - 0.15 = 0.40$.

The probability that the student failed in at least one of the subjects is 0.40 or 40%.
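Working in exact fractions avoids any floating-point rounding in the percentage arithmetic:

```python
from fractions import Fraction

p_physics = Fraction(30, 100)   # failed in Physics
p_maths = Fraction(25, 100)     # failed in Mathematics
p_both = Fraction(15, 100)      # failed in both

# Addition Rule on the percentages expressed as fractions
p_union = p_physics + p_maths - p_both
assert p_union == Fraction(2, 5)   # 0.40, i.e. 40%
```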